Galaxy-Galaxy Flexion: Weak Lensing to Second Order
In this paper, we develop a new gravitational lensing inversion technique.
While traditional approaches assume that the lensing field varies little across
a galaxy image, we note that this variation in the field can give rise to a
``Flexion'' or bending of a galaxy image, which may then be used to detect a
lensing signal with increased signal-to-noise. Since the significance of the
Flexion signal increases on small scales, this is ideally suited to
galaxy-galaxy lensing. We develop an inversion technique based on the
``Shapelets'' formalism of Refregier (2003). We then demonstrate a proof of
this concept by measuring a Flexion signal in the Deep Lens Survey. Assuming an
intrinsically isothermal distribution, we find from the Flexion signal alone a
velocity width of v_c=221\pm 12 km/s for lens galaxies of r < 21.5, subject to
uncertainties in the intrinsic Flexion distribution.
Comment: 11 pages, LaTeX, 4 figures. Accepted by ApJ; changes include revision of errors from previous draft.
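As a reader's aid, the second-order lensing expansion behind the Flexion signal can be written compactly; the notation below follows common usage in the flexion literature rather than necessarily the paper's own:

\beta_i \simeq A_{ij}\,\theta_j + \frac{1}{2} D_{ijk}\,\theta_j \theta_k,
\qquad A_{ij} = \delta_{ij} - \partial_i \partial_j \psi,
\qquad D_{ijk} = \partial_k A_{ij},

where \psi is the lensing potential and A_{ij} carries the usual convergence \kappa and shear \gamma. In complex notation with \partial = \partial_1 + i\partial_2, the second-order terms combine into a spin-1 first flexion \mathcal{F} = \partial\kappa and a spin-3 second flexion \mathcal{G} = \partial\gamma, which describe the skewed bending of a galaxy image that such an inversion measures.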
Cross-correlation cosmography with HI intensity mapping
The cross-correlation of a foreground density field with two different
background convergence fields can be used to measure cosmographic distance
ratios and constrain dark energy parameters. We investigate the possibility of
performing such measurements using a combination of optical galaxy surveys and
HI intensity mapping surveys, with emphasis on the performance of the planned
Square Kilometre Array (SKA). Using HI intensity mapping to probe the
foreground density tracer field and/or the background source fields has the
advantage of excellent redshift resolution and a longer lever arm achieved by
using the lensing signal from high redshift background sources. Our results
show that, for our best SKA-optical configuration of surveys, a constant
equation of state for dark energy can be constrained to [...] for a sky
coverage of [...], assuming a [...] prior
for the dark energy density parameter. We also show that using the CMB as the
second source plane is not competitive, even when considering a COrE-like
satellite.
Comment: 10 pages, 8 figures, 1 table; version accepted for publication in Physical Review
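A minimal numerical sketch of the distance-ratio observable such cross-correlations measure, written with astropy; all redshifts and parameter values below are illustrative assumptions, not numbers from the paper:

# A minimal sketch of the cosmographic distance-ratio observable that
# cross-correlation cosmography measures. All redshifts and parameter
# values here are illustrative assumptions, not numbers from the paper.
from astropy.cosmology import FlatwCDM

def distance_ratio(cosmo, z_l, z_s1, z_s2):
    # R = [D(z_l, z_s1) / D(z_s1)] / [D(z_l, z_s2) / D(z_s2)]:
    # the ratio of lensing efficiencies of two source planes behind
    # the same foreground lens plane.
    d_ls1 = cosmo.angular_diameter_distance_z1z2(z_l, z_s1)
    d_ls2 = cosmo.angular_diameter_distance_z1z2(z_l, z_s2)
    d_s1 = cosmo.angular_diameter_distance(z_s1)
    d_s2 = cosmo.angular_diameter_distance(z_s2)
    return float((d_ls1 / d_s1) / (d_ls2 / d_s2))

# The ratio depends on geometry alone, so scanning the dark-energy
# equation of state w shows the sensitivity such surveys exploit
# (z_s2 = 1100 mimics using the CMB as the second source plane).
for w in (-1.2, -1.0, -0.8):
    cosmo = FlatwCDM(H0=70.0, Om0=0.3, w0=w)
    print(f"w = {w}: R = {distance_ratio(cosmo, 0.8, 2.0, 1100.0):.4f}")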
Viewpoint | Personal Data and the Internet of Things: It is time to care about digital provenance
The Internet of Things promises a connected environment reacting to and
addressing our every need, but based on the assumption that all of our
movements and words can be recorded and analysed to achieve this end.
Ubiquitous surveillance is also a precondition for most dystopian societies,
both real and fictional. How our personal data is processed and consumed in an
ever more connected world must imperatively be made transparent, and more
effective technical solutions than those currently on offer for managing
personal data must urgently be investigated.
Comment: 3 pages, 0 figures; preprint for Communications of the ACM
Learning from the machine: interpreting machine learning algorithms for point- and extended-source classification
We investigate star-galaxy classification for astronomical surveys in the
context of four methods enabling the interpretation of black-box machine
learning systems. The first is outputting and exploring the decision boundaries
given by decision-tree-based methods, which enables the visualization of the
classification categories. The second is the Mutual Information based
Transductive Feature Selection (MINT) algorithm, which we use for feature
pre-selection: when only a small number of input features can be supplied to a
classification algorithm, feature pre-selection determines which of the many
possible input properties should be chosen. The third is the tree-interpreter
package, which allows popular decision-tree-based ensemble methods to be
opened, visualized, and understood through additional analysis of the
tree-based model, determining not only which features are important to the
model, but how important a feature is for a particular classification given
its value. Lastly, we use decision boundaries from the model to revise an
existing classification method, essentially asking the tree-based method where
decision boundaries are best placed and defining a new classification method
from them.
We showcase these techniques by applying them to the problem of star-galaxy
separation using data from the Sloan Digital Sky Survey (hereafter SDSS). We
use the output of MINT and the ensemble methods to demonstrate how more complex
decision boundaries improve star-galaxy classification accuracy over the
standard SDSS frames approach (reducing misclassifications by up to [...]).
We then show how tree-interpreter can be used to explore how relevant each
photometric feature is when making a classification on an object-by-object
basis.
Comment: 12 pages, 8 figures, 8 tables
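A short sketch of the per-object decomposition that the tree-interpreter package provides, as described above; the random data below merely stands in for SDSS photometric features, and the labeling rule is a hypothetical toy:

# Sketch of per-object interpretation with the treeinterpreter package.
# The random data merely stands in for SDSS photometric features; the
# label rule is a hypothetical toy, not the paper's classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from treeinterpreter import treeinterpreter as ti

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # mock photometric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # mock star/galaxy labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Decompose one object's prediction into the training-set prior (bias)
# plus a contribution from each feature: prediction = bias + sum(contribs).
prediction, bias, contributions = ti.predict(model, X[:1])
print("class probabilities:", prediction[0])
print("prior (bias):", bias[0])
for i, contrib in enumerate(contributions[0]):
    print(f"feature {i}: per-class contribution {contrib}")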
Testing Emergent Gravity on Galaxy Cluster Scales
Verlinde's theory of Emergent Gravity (EG) describes gravity as an emergent
phenomenon rather than a fundamental force. Applying this reasoning in de
Sitter space leads to gravity behaving differently on galaxy and galaxy cluster
scales; this excess gravity might offer an alternative to dark matter. Here we
test these ideas using the data from the Coma cluster and from 58 stacked
galaxy clusters. The X-ray surface brightness measurements of the clusters at
[...], along with the weak lensing data, are used to test the theory.
We find that the simultaneous EG fits of the X-ray and weak lensing datasets
are significantly worse than those provided by General Relativity (with cold
dark matter). For the Coma cluster, the predictions from Emergent Gravity and
General Relativity agree in the range of 250 - 700 kpc, while at around 1 Mpc
scales, EG total mass predictions are larger by a factor of 2. For the cluster
stack the predictions are only in good agreement at around 1 - 2 Mpc
scales, while for [...] Mpc EG is in strong tension with the data.
According to the Bayesian information criterion analysis, GR is preferred in
all tested datasets; however, we also discuss possible modifications of EG
that greatly relax the tension with the data.
Comment: 19 pages, 5 figures, 5 tables; accepted for publication in JCAP
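For orientation, the Emergent Gravity prediction usually tested in such analyses is Verlinde's "apparent dark matter" relation; the spherically symmetric form commonly quoted in this literature is given here as a reader's aid rather than taken from the paper:

M_D^2(r) = \frac{c H_0 r^2}{6 G}\, \frac{d\left[ M_b(r)\, r \right]}{dr},

where M_b(r) is the enclosed baryonic mass, so the total mass entering X-ray and lensing fits is M_{tot}(r) = M_b(r) + M_D(r). For a point-like baryonic distribution this yields M_D \propto r, which illustrates why the EG mass keeps growing at Mpc scales where GR-with-cold-dark-matter profiles flatten.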
Testing Bekenstein's Relativistic MOND gravity with Lensing Data
We propose to use multiple-imaged gravitational lenses to set limits on
gravity theories without dark matter, specifically TeVeS (Bekenstein 2004), a
theory which is consistent with fundamental relativistic principles and the
phenomenology of MOdified Newtonian Dynamics (MOND) theory. After setting the
framework for lensing and cosmology, we derive analytically the deflection
angle for the point lens and the Hernquist galaxy profile, and fit
galaxy-quasar lenses in the CASTLES sample. We do this with three methods,
fitting the observed Einstein ring sizes, the image positions, or the flux
ratios. In all cases we consistently find that stars in galaxies in MOND/TeVeS
provide adequate lensing. Bekenstein's toy \mu-function provides more
efficient lensing than the standard MOND \mu-function. But for a handful of
lenses [indicated in Tables 2 and 3 and Fig. 16] a good fit would require a lens mass
orders of magnitude larger/smaller than the stellar mass derived from
luminosity, unless the modification function \mu and modification scale a_0
of universal gravity were allowed to be very different from what spiral
galaxy rotation curves normally imply. We discuss the limitation of present
data and summarize constraints on the MOND \mu-function. We also show that
the simplest TeVeS "minimal-matter" cosmology, a baryonic universe with a
cosmological constant, can fit the distance-redshift relation from the
supernova data, but underpredicts the sound horizon size at the last
scattering. We conclude that lensing is a promising approach to differentiate
laws of gravity (see also astro-ph/0512425).
Comment: reduced to 17 pages, 16 figures; discussed cosmology and constraints on the \mu-function; MNRAS accepted
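As a worked example of why stellar masses alone can provide adequate lensing in such theories, consider the deep-MOND limit (|g| << a_0), where the interpolation function satisfies \mu(x) \approx x; this is textbook MOND phenomenology, not the paper's specific derivation:

\mu(|g|/a_0)\, g = g_N \quad\Rightarrow\quad g(r) \simeq \sqrt{g_N a_0} = \frac{\sqrt{G M a_0}}{r}.

Inserting this into the standard weak-field deflection integral, with transverse component g_\perp = \sqrt{G M a_0}\, b / (b^2 + z^2) for impact parameter b,

\alpha(b) = \frac{2}{c^2} \int_{-\infty}^{\infty} g_\perp \, dz = \frac{2}{c^2} \sqrt{G M a_0} \int_{-\infty}^{\infty} \frac{b\, dz}{b^2 + z^2} = \frac{2\pi \sqrt{G M a_0}}{c^2},

independent of impact parameter, exactly as for an isothermal profile; a point mass in the deep-MOND regime therefore keeps deflecting light efficiently at large radii.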
A Comparison of Price Differentials in the Chain and Independent Grocery Stores of Logan, Utah
This study is the result of two separate surveys of the retail grocery stores in Logan, Utah. The primary purpose of these surveys was to make a detailed study of price differentials as they exist between stores of different kind, class, location, and size. Logan was chosen for this survey for a number of reasons: (1) it is typical of many Rocky Mountain cities in size; (2) no one industry completely dominates the economy of the city; (3) there are sufficient stores in kind and number to give the necessary data; and (4) besides the local independent stores in operation, there is a national chain system represented by Safeway Stores, Inc. and a small state chain represented by the American Food Store.

The problem was to survey the Logan city grocery stores for price data on food commodities in sufficient number to indicate the price differentials that exist within the stores. To facilitate this, the local stores were segregated into four distinct classes: (1) national chains, (2) a small state chain, (3) large city independents, and (4) neighborhood stores. The prices found in the chain systems were used as a basis for the comparisons made between the chain stores and the two groups of independent stores.

The study has proven valuable in that price differentials have been discovered between the various commodity groups, as well as within individual items. These differentials have varied with the commodity and within the different classifications of stores. Many of the pricing policies practiced in the Logan stores are representative of market conditions in other cities within the Rocky Mountain area and might justifiably be applied to these communities with the expectation that similar results would occur.